To solve the problem that directly using high-dimensional, high-frequency, noisy real-world data for data processing leads to unreliable estimators, a data uncertainty quantification method based on Generative Adversarial Network (GAN) was proposed. Firstly, the original data distribution was reconstructed by GAN, constructing a mapping from the noise space to the original data space. Secondly, new samples following the original data distribution were drawn by the Markov Chain Monte Carlo (MCMC) method. Thirdly, confidence intervals for the uncertainty of the samples were defined based on the specified functions. Finally, the confidence intervals were used to estimate the uncertainty of the original data, and the data falling within the confidence intervals were selected as the data used by the estimator. Experimental results show that, compared with using the original data, training the estimator with the data within the confidence intervals requires 50% fewer samples to reach the accuracy upper limit, and on average 30% fewer samples to achieve the same test accuracy.
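The sample-then-filter pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `generator` function is a hypothetical stand-in for a trained GAN generator, the noise prior is assumed standard normal, and the confidence interval is taken as a simple empirical quantile interval over the MCMC samples.

```python
import math
import random

def generator(z):
    # Hypothetical stand-in for a trained GAN generator mapping the
    # noise space to the data space (the real one is a trained network).
    return 2.0 * z + 1.0

def log_prior(z):
    # Log-density of the standard normal noise distribution (up to a constant).
    return -0.5 * z * z

def mcmc_data_samples(n, step=0.5, seed=0):
    # Metropolis-Hastings random walk over the noise space; each state is
    # pushed through the generator to yield a data-space sample.
    rng = random.Random(seed)
    z, out = 0.0, []
    for _ in range(n):
        prop = z + rng.gauss(0.0, step)
        if math.log(rng.random() + 1e-300) < log_prior(prop) - log_prior(z):
            z = prop
        out.append(generator(z))
    return out

def confidence_interval(samples, alpha=0.1):
    # Empirical (1 - alpha) interval from sample quantiles.
    s = sorted(samples)
    return s[int(len(s) * alpha / 2)], s[int(len(s) * (1 - alpha / 2)) - 1]

samples = mcmc_data_samples(5000)
lo, hi = confidence_interval(samples)

# Keep only the original data falling inside the interval for the estimator.
rng = random.Random(1)
original = [generator(rng.gauss(0.0, 1.0)) for _ in range(200)]
filtered = [x for x in original if lo <= x <= hi]
```

The filtered list plays the role of the reduced training set: points far outside the interval, which the reconstructed distribution deems unlikely, are excluded before training the estimator.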
To address the problem that the growing scale of integrated circuits and the increasing number of on-chip registers make verification more difficult, a lightweight register model was proposed. Firstly, a concise underlying structure was designed and combined with parameterized settings to reduce the runtime memory consumption of the register model. Then, the register verification requirements at different levels, such as module level and system level, were analyzed, and the SystemVerilog language was used to implement the various functions required for verification. Finally, built-in test cases and an automatic register model generation tool were developed to reduce the setup time of the verification environment in which the register model is located. Experimental results show that the proposed register model consumes only 21.65% of the runtime memory of the Universal Verification Methodology (UVM) register model; in terms of function, the proposed register model can be applied to both traditional UVM verification environments and non-UVM verification environments, and checks functions such as the read-write property, reset value, and backdoor access path of 25 types of registers. This lightweight register model has good universality and flexibility in engineering practice, meets the needs of register verification, and can effectively improve register verification efficiency.
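The core idea of a lightweight register model — storing one mirror value per register instead of a heavyweight per-field object hierarchy, while still enforcing read-write properties and reset values — can be illustrated outside SystemVerilog. The sketch below is a conceptual Python analogue, not the paper's SystemVerilog implementation; the class and field names are hypothetical.

```python
class Field:
    # Describes one bit-field of a register: position, width, access policy
    # ("RW", "RO", "WO") and reset value. Fields carry no runtime state.
    def __init__(self, name, lsb, width, access="RW", reset=0):
        self.name, self.lsb, self.width = name, lsb, width
        self.access, self.reset = access, reset
        self.mask = ((1 << width) - 1) << lsb

class Register:
    # Lightweight mirror: a single integer of state per register.
    def __init__(self, name, fields):
        self.name, self.fields = name, fields
        self.reset()

    def reset(self):
        # Rebuild the mirror value from the per-field reset values.
        self.value = 0
        for f in self.fields:
            self.value |= (f.reset & ((1 << f.width) - 1)) << f.lsb

    def write(self, data):
        # Only writable fields absorb a front-door write; read-only
        # fields keep their current value (read-write property check).
        for f in self.fields:
            if f.access in ("RW", "WO"):
                self.value = (self.value & ~f.mask) | (data & f.mask)

    def read(self):
        return self.value

# Example: a control register with a writable enable bit and a read-only ID.
ctrl = Register("CTRL", [Field("EN", 0, 1, "RW", 0),
                         Field("ID", 4, 4, "RO", 0xA)])
```

Because the model keeps just one integer plus a small field-descriptor list per register, a large register map costs far less memory than one object per field with per-field mirrored state.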
To address the problem that existing Radio Frequency Identification (RFID) group proof protocols are inefficient and vulnerable to attacks such as replay and tracking, a new group proof protocol based on a secret key-sharing tree was proposed. The protocol designed a new group-proofing key construction based on a secret key sharing scheme: the group-proofing key was repeatedly divided into sub-keys to create a key tree. This method increased the complexity of the key construction, raised the difficulty for attackers attempting to recover the group key, and improved the security of the tags' group proof. The reader interacts with each tag only once to authenticate its validity and collect the group-proof information, which greatly increases proof efficiency. Compared with existing protocols such as Yoking-Proofs, ECC-based and Tree-based protocols, the proposed protocol has better security and higher efficiency.
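The "repeatedly divided" key construction can be sketched with XOR-based secret sharing, where a key is split into shares whose XOR equals the key, and each share is split again at the next level. This is a simplified illustration under assumed parameters (fanout 3, depth 2, 128-bit keys), not the paper's exact construction.

```python
import functools
import os

def split_key(secret, n):
    # XOR secret sharing: secret = share_1 ^ ... ^ share_n.
    # All n shares are needed to reconstruct; no proper subset leaks anything.
    shares = [int.from_bytes(os.urandom(16), "big") for _ in range(n - 1)]
    shares.append(functools.reduce(lambda a, b: a ^ b, shares, secret))
    return shares

def build_key_tree(group_key, fanout, depth):
    # Divide the group-proofing key repeatedly: every sub-key at one level
    # is split again at the next, forming a key tree whose leaf sub-keys
    # are distributed to the tags.
    levels = [[group_key]]
    for _ in range(depth):
        levels.append([s for k in levels[-1] for s in split_key(k, fanout)])
    return levels

def recover_group_key(leaves):
    # Each parent equals the XOR of its children, so the root group key
    # is recoverable only from the complete set of leaves.
    return functools.reduce(lambda a, b: a ^ b, leaves)

group_key = int.from_bytes(os.urandom(16), "big")
tree = build_key_tree(group_key, fanout=3, depth=2)
leaves = tree[-1]  # 3^2 = 9 leaf sub-keys for the tags
```

An attacker compromising a few tags obtains only some leaves, which reveal nothing about the group key; the reader, after one round with every tag, holds all shares needed to verify the group proof.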
In current multi-hop unidirectional identity-based proxy re-encryption schemes, the ciphertext length increases with the number of hops, which reduces efficiency. To solve this issue, a new multi-hop unidirectional identity-based proxy re-encryption scheme was designed by changing the side that generates the re-encryption key: the re-encryption keys were generated by the sender. In the scheme, the first-level and second-level ciphertexts have the same pattern, and the length of the re-encrypted ciphertext remains unchanged. Efficiency analysis shows that the proposed scheme reduces the number of exponentiation, multiplication, and bilinear pairing operations. The new scheme was proved secure against chosen-ciphertext attacks in the random oracle model under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
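The key property claimed above — re-encryption that preserves the ciphertext's shape, so the length stays constant no matter how many hops occur — can be illustrated with a toy ElGamal-based proxy re-encryption in the style of Blaze–Bleumer–Strauss. Note the caveats: this toy scheme is bidirectional, not identity-based, uses a small insecure group, and derives the re-encryption key from both secret keys, unlike the paper's sender-side unidirectional construction. It is shown only to make the constant-length multi-hop behavior concrete.

```python
import random

# Toy Schnorr group: p = 2q + 1, g generates the subgroup of prime order q.
# These parameters are far too small for real use.
p, q, g = 1019, 509, 4

def keygen(rng):
    sk = rng.randrange(1, q)
    return sk, pow(g, sk, p)          # (secret key a, public key g^a)

def encrypt(pk, m, rng):
    # Ciphertext under pk = g^a: (m * g^r, pk^r) -- always two elements.
    r = rng.randrange(1, q)
    return (m * pow(g, r, p)) % p, pow(pk, r, p)

def rekey(sk_from, sk_to):
    # Toy re-encryption key b / a mod q (needs both secrets here;
    # the paper's scheme instead lets the sender generate it).
    return (sk_to * pow(sk_from, -1, q)) % q

def reencrypt(rk, ct):
    # (c1, g^{ar}) -> (c1, g^{br}): same two-element shape every hop,
    # so the ciphertext length never grows with the number of hops.
    c1, c2 = ct
    return c1, pow(c2, rk, p)

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)  # c2^(1/sk) = g^r
    return (c1 * pow(g_r, -1, p)) % p

rng = random.Random(7)
sk_a, pk_a = keygen(rng)
sk_b, pk_b = keygen(rng)
sk_c, pk_c = keygen(rng)
ct1 = encrypt(pk_a, 123, rng)                 # encrypted to A
ct2 = reencrypt(rekey(sk_a, sk_b), ct1)       # hop 1: A -> B
ct3 = reencrypt(rekey(sk_b, sk_c), ct2)       # hop 2: B -> C
```

After two hops the ciphertext is still a pair of group elements, which is exactly the constant-length behavior the proposed scheme achieves in the identity-based setting.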